
    Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning

    Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al., Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit to regularize the reconstructions. We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the fundamental building block. Through a comprehensive comparison of several quantitative metrics, we show that the dynamic method improves upon previous static approaches, with fewer artifacts and better overall reconstruction fidelity. (Comment: 12 pages, 7 figures, 2 tables)
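
    As an illustration of the building block, the following PyTorch sketch implements a minimal convolutional GRU cell in which the dense matrix products of a standard GRU are replaced by 2D convolutions, so the hidden state keeps its spatial layout. The paper's exact split-convolutional factorization is not specified here and is omitted; each projection angle is fed as one "time" step.

        import torch
        import torch.nn as nn

        class ConvGRUCell(nn.Module):
            """Minimal convolutional GRU cell (sketch; the paper's SC-GRU
            additionally splits the convolution kernels, not shown here)."""
            def __init__(self, in_ch, hid_ch, k=3):
                super().__init__()
                p = k // 2
                # update/reset gates and candidate state, all convolutional
                self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
                self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

            def forward(self, x, h):
                z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
                h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
                return (1 - z) * h + z * h_tilde   # gated state update

        # one raw image per illumination angle plays the role of a time step
        cell = ConvGRUCell(in_ch=1, hid_ch=16)
        h = torch.zeros(1, 16, 64, 64)
        for frame in torch.randn(8, 1, 1, 64, 64):   # 8 angular steps
            h = cell(frame, h)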

    Coordinate-based neural representations for computational adaptive optics in widefield microscopy

    Widefield microscopy is widely used for non-invasive imaging of biological structures at subcellular resolution. When applied to complex specimens, its image quality is degraded by sample-induced optical aberration. Adaptive optics can correct wavefront distortion and restore diffraction-limited resolution, but requires wavefront sensing and corrective devices, increasing system complexity and cost. Here, we describe a self-supervised machine learning algorithm, CoCoA, that performs joint wavefront estimation and three-dimensional structural information extraction from a single input 3D image stack, without the need for an external training dataset. We implemented CoCoA for widefield imaging of mouse brain tissues and validated its performance with direct-wavefront-sensing-based adaptive optics. Importantly, we systematically explored and quantitatively characterized the limiting factors of CoCoA's performance. Using CoCoA, we demonstrated the first in vivo widefield mouse brain imaging using machine-learning-based adaptive optics. Incorporating coordinate-based neural representations and a forward physics model, the self-supervised scheme of CoCoA should be applicable to microscopy modalities in general. (Comment: 33 pages, 5 figures)
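
    A minimal sketch of the two jointly optimized ingredients, assuming a Fourier-feature coordinate network for the structure and a Zernike-coefficient parameterization of the wavefront; the paper's exact architecture and forward model may differ.

        import torch
        import torch.nn as nn

        class CoordinateMLP(nn.Module):
            """Maps 3D coordinates to a scalar structure value via Fourier
            features + MLP (a common coordinate-network recipe; CoCoA's
            exact design may differ)."""
            def __init__(self, n_feats=64, hidden=128):
                super().__init__()
                self.B = nn.Parameter(torch.randn(3, n_feats) * 10.0,
                                      requires_grad=False)
                self.net = nn.Sequential(
                    nn.Linear(2 * n_feats, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, xyz):                 # xyz: (N, 3) in [-1, 1]
                proj = 2 * torch.pi * xyz @ self.B
                feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
                return self.net(feats)

        structure = CoordinateMLP()
        zernike_coeffs = torch.zeros(15, requires_grad=True)  # wavefront unknowns
        opt = torch.optim.Adam([*structure.parameters(), zernike_coeffs], lr=1e-3)
        # each step: render the stack through a differentiable widefield forward
        # model with the aberrated PSF, then minimize the self-supervised loss
        # against the single measured 3D stack (forward model omitted here)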

    Accelerated deep self-supervised ptycho-laminography for three-dimensional nanoscale imaging of integrated circuits

    Three-dimensional inspection of nanostructures such as integrated circuits is important for security and reliability assurance. Two scanning operations are required: ptychographic scanning, to recover the complex transmissivity of the specimen; and rotation of the specimen, to acquire multiple projections covering the 3D spatial frequency domain. Two types of rotational scanning are possible: tomographic and laminographic. For flat, extended samples, for which full 180-degree coverage is not possible, the latter is preferable because it provides better coverage of the 3D spatial frequency domain than limited-angle tomography, and because the amount of attenuation through the sample is approximately the same for all projections. However, both techniques are time-consuming because of extensive acquisition and computation time. Here, we demonstrate the acceleration of ptycho-laminographic reconstruction of integrated circuits with 16-times fewer angular samples and 4.67-times faster computation by using a physics-regularized deep self-supervised learning architecture. We check the fidelity of our reconstruction against a densely sampled reconstruction that uses full scanning and no learning. As already reported elsewhere [Zhou and Horstmeyer, Opt. Express 28(9), 12872-12896 (2020)], we observe improvement of reconstruction quality even over the densely sampled reconstruction, due to the ability of the self-supervised learning kernel to fill the missing cone. (Comment: 13 pages, 5 figures, 1 table)
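
    The self-supervised scheme can be sketched as a deep-image-prior-style loop: an untrained network parameterizes the volume, and its weights are fitted so that simulated laminographic projections match the sparse measured ones. The projector below is a crude stand-in for the true tilted-axis geometry, and all names and sizes are illustrative.

        import torch
        import torch.nn as nn

        def forward_laminography(volume, angles):
            # placeholder projector: a per-angle roll followed by summation
            # along the optical axis, standing in for the true tilted-axis
            # projection geometry (a real implementation needs a
            # differentiable rotating projector)
            projs = [torch.roll(volume, int(a), dims=-1).sum(dim=0)
                     for a in angles]
            return torch.stack(projs)

        net = nn.Sequential(  # untrained CNN acting as a structural prior
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )
        seed = torch.randn(1, 1, 32, 64, 64)
        angles = torch.arange(0, 8)          # 16x fewer angles than a full scan
        measured = torch.randn(8, 64, 64)    # stands in for measured projections
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(10):
            vol = net(seed)[0, 0]
            loss = ((forward_laminography(vol, angles) - measured) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()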

    Multi-dimensional computational imaging from diffraction intensity using deep neural networks

    Diffraction of light can be found everywhere in nature, from sunlight rays fanning out from clouds to the multiple colors reflected from the surface of a CD. This phenomenon explains any change in the path of light due to an obstacle, and it is of particular significance because, properly exploited, it allows us to see transparent (or pure-phase) objects, e.g. biological cells under visible-wavelength light or integrated circuits under X-rays. However, cameras only measure the intensity of the diffracted light, which makes the camera measurements incomplete due to the loss of phase information. Thus, this thesis addresses the reconstruction of multi-dimensional phase information from diffraction intensities by regularized inversion using deep neural networks, for two- and three-dimensional applications. The inversion process begins with the definition of a forward physical model that relates a diffraction intensity to a phase object, and then involves a physics-informing step (or, equivalently, a physics prior) for the deep neural networks, where applicable. In this thesis, two-dimensional wavefront aberrations are retrieved for high-contrast imaging of exoplanets using a deep residual neural network, and transparent planar objects behind dynamic scattering media are revealed by a recurrent neural network, both in an end-to-end training fashion. Next, a multi-layered, three-dimensional glass phantom of integrated circuits is reconstructed under the limited-angle phase computed tomography geometry with visible-wavelength laser illumination using a dynamical machine learning framework. Furthermore, deep neural network regularization is deployed for the reconstruction of real integrated circuits from far-field diffraction intensities under the ptychographic X-ray computed tomography geometry with partially coherent synchrotron X-ray illumination. (Ph.D. thesis)
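
    The common thread of the thesis, phase-from-intensity inversion against a forward physical model, can be sketched in a few lines of PyTorch for the simplest far-field (Fraunhofer) geometry; in the learned schemes described above, a DNN prior would replace or constrain the raw per-pixel estimate used here.

        import torch

        def forward_intensity(phase):
            field = torch.exp(1j * phase)        # pure-phase object
            far_field = torch.fft.fft2(field)    # far-field diffraction
            return far_field.abs() ** 2          # phase is lost at the camera

        phase_true = torch.rand(64, 64) * 2 * torch.pi
        intensity = forward_intensity(phase_true)

        # regularized inversion = fit a phase estimate to the intensity;
        # the thesis replaces this plain estimate with DNN-regularized ones
        phase_est = torch.zeros(64, 64, requires_grad=True)
        opt = torch.optim.Adam([phase_est], lr=0.1)
        for _ in range(100):
            loss = ((forward_intensity(phase_est) - intensity) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()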

    Probability of error as an image metric for the assessment of tomographic reconstruction of dense-layered binary-phase objects

    IARPA (Contract FA8650-17-C-9113)

    Phase Extraction Neural Network (PhENN) with Coherent Modulation Imaging (CMI) for phase retrieval at low photon counts

    © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. Imaging with low-dose light is of importance in various fields, especially when minimizing radiation-induced damage to samples is desirable. The raw image captured at the detector plane is then predominantly a Poisson random process, with Gaussian noise added due to the quantum nature of photo-electric conversion. Under such noisy conditions, highly ill-posed problems such as phase retrieval from raw intensity measurements become prone to strong artifacts in the reconstructions; a situation that deep neural networks (DNNs) have already been shown to be useful at improving. Here, we demonstrate that random phase modulation on the optical field, also known as coherent modulation imaging (CMI), in conjunction with the phase extraction neural network (PhENN) and a Gerchberg-Saxton-Fienup (GSF) approximant, further improves the resilience to noise of the phase-from-intensity imaging problem. We offer design guidelines for implementing the CMI hardware with the proposed computational reconstruction scheme and quantify the reconstruction improvement as a function of photon count. Funding: Southern University of Science and Technology (6941806); Intelligence Advanced Research Projects Activity (FA8650-17-C-9113); National Natural Science Foundation of China (11775105).
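
    The CMI measurement model at low photon counts can be simulated directly, which is also how training data for PhENN-style networks are typically generated; the mask statistics and photon budget below are illustrative assumptions.

        import torch

        torch.manual_seed(0)
        phase_obj = torch.rand(64, 64) * 2 * torch.pi       # unknown phase object
        modulator = torch.exp(1j * 2 * torch.pi * torch.rand(64, 64))  # known mask

        field = torch.exp(1j * phase_obj) * modulator       # modulated object field
        intensity = torch.fft.fft2(field).abs() ** 2        # far-field intensity
        intensity = intensity / intensity.sum()             # normalize to unit flux

        photons_per_frame = 1e4                              # low-photon regime
        noisy = torch.poisson(intensity * photons_per_frame) # Poisson shot noise
        noisy = noisy + 2.0 * torch.randn_like(noisy)        # Gaussian readout noise
        # PhENN + the GSF approximant would then be applied to `noisy` to
        # recover `phase_obj`; both are beyond this sketch.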

    Dynamical machine learning volumetric reconstruction of objects’ interiors from limited angular views

    Limited-angle tomography of an interior volume is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al., Proc. Natl. Acad. Sci. 116, 19848–19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in the angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit to regularize the reconstructions. We devised a Recurrent Neural Network (RNN) architecture with a novel Separable-Convolution Gated Recurrent Unit (SC-GRU) as the fundamental building block. Through a comprehensive comparison of several quantitative metrics, we show that the dynamic method is suitable for generic interior-volumetric reconstruction under a limited-angle scheme. We show that this approach accurately reconstructs volume interiors under two conditions: weak scattering, when the Radon transform approximation is applicable and the forward operator is well defined; and strong scattering, which is nonlinear with respect to the 3D refractive index distribution and includes uncertainty in the forward operator. Intelligence Advanced Research Projects Activity (Grant FA8650-17-C-9113).
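
    In the weak-scattering condition, each raw image is well approximated by a Radon projection of the interior volume along the illumination direction. A minimal parallel-beam projector for one 2D slice, with an illustrative limited angular range:

        import numpy as np
        from scipy.ndimage import rotate

        def radon_projection(slice_2d, angle_deg):
            # rotate the slice, then integrate along one axis: a line
            # integral per detector pixel, i.e. one row of the sinogram
            rotated = rotate(slice_2d, angle_deg, reshape=False, order=1)
            return rotated.sum(axis=0)

        phantom = np.zeros((64, 64))
        phantom[24:40, 24:40] = 1.0          # toy binary interior feature
        angles = np.linspace(-30, 30, 9)     # limited angular range
        sinogram = np.stack([radon_projection(phantom, a) for a in angles])
        print(sinogram.shape)                # (9, 64): one projection per "time" step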

    Learning to synthesize: robust phase retrieval at low photon counts

    The quality of inverse problem solutions obtained through deep learning is limited by the nature of the priors learned from the examples presented during the training phase. Particularly in the case of quantitative phase retrieval, spatial frequencies that are underrepresented in the training database, most often in the high band, tend to be suppressed in the reconstruction. Ad hoc solutions have been proposed, such as pre-amplifying the high spatial frequencies in the examples; however, while that strategy improves the resolution, it also leads to high-frequency artefacts, as well as low-frequency distortions in the reconstructions. Here, we present a new approach that learns separately how to handle the two frequency bands, low and high, and learns how to synthesize these two bands into full-band reconstructions. We show that this “learning to synthesize” (LS) method yields phase reconstructions of high spatial resolution and without artefacts, and that it is resilient to high-noise conditions, e.g., in the case of very low photon flux. Beyond quantitative phase retrieval, the LS method is applicable, in principle, to any inverse problem in which the forward operator treats different frequency bands unevenly, i.e., is ill-posed. ©2020, The Author(s). Intelligence Advanced Research Projects Activity (IARPA) grant (No. FA8650-17-C-9113).
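
    The band split at the heart of LS can be sketched with Fourier-domain masking; the cutoff value and the trivial fusion shown here are illustrative stand-ins for the learned synthesizer network.

        import torch

        def split_bands(img, cutoff=0.15):
            # split an image into low- and high-frequency bands with a
            # circular Fourier-domain mask (cutoff is illustrative)
            F = torch.fft.fftshift(torch.fft.fft2(img))
            h, w = img.shape[-2:]
            yy, xx = torch.meshgrid(
                torch.linspace(-0.5, 0.5, h), torch.linspace(-0.5, 0.5, w),
                indexing="ij")
            low_mask = ((xx ** 2 + yy ** 2).sqrt() <= cutoff).to(F.dtype)
            low = torch.fft.ifft2(torch.fft.ifftshift(F * low_mask)).real
            return low, img - low

        img = torch.randn(64, 64)
        low, high = split_bands(img)
        # In LS, two DNNs reconstruct the phase from each band separately and
        # a third "synthesizer" network fuses them into a full-band estimate;
        # a trivial stand-in for that fusion is simply `low + high`.
        assert torch.allclose(low + high, img, atol=1e-5)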

    Recurrent neural network reveals transparent objects through scattering media

    © 2021 Optical Society of America. Scattering generally worsens the condition of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including, recently, the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018)] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018)] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a novel dynamical machine learning approach for imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during training and thereby reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics, and use multiple speckle measurements from different angles to form a sequence of images for training. A recurrent neural network (RNN) embedded with these dynamics extracts the useful information and discards the redundancies, thus recovering quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media from the speckle correlation among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve other spatiotemporal dynamics.
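
    The acquisition scheme can be sketched as follows: a static phase object is viewed through one diffuser that rotates on-axis between frames, producing a sequence of speckle intensities that serves as the RNN input. The single-FFT propagation model and the sizes below are simplified assumptions.

        import numpy as np
        from scipy.ndimage import rotate

        rng = np.random.default_rng(0)
        obj_phase = rng.uniform(0, 2 * np.pi, (64, 64))   # static transparent object
        diffuser = rng.uniform(0, 2 * np.pi, (64, 64))    # one physical diffuser

        frames = []
        for angle in np.arange(0, 90, 10):                # on-axis rotation steps
            d = rotate(diffuser, angle, reshape=False, order=1, mode="wrap")
            field = np.exp(1j * (obj_phase + d))          # object + diffuser phase
            frames.append(np.abs(np.fft.fft2(field)) ** 2)  # speckle intensity
        sequence = np.stack(frames)                       # (9, 64, 64) RNN input
        print(sequence.shape)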